Cognitive style


Contrasting Cognitive Styles in Vision-Language Models: Holistic Attention in Japanese Versus Analytical Focus in English

Sabir, Ahmed, Gasper, Azinovič, Loem, Mengsay, Sharma, Rajesh

arXiv.org Artificial Intelligence

Cross-cultural research in perception and cognition has shown that individuals from different cultural backgrounds process visual information in distinct ways. East Asians, for example, tend to adopt a holistic perspective, attending to contextual relationships, whereas Westerners often employ an analytical approach, focusing on individual objects and their attributes. In this study, we investigate whether Vision-Language Models (VLMs) trained predominantly on different languages, specifically Japanese and English, exhibit similar culturally grounded attentional patterns. Using comparative analysis of image descriptions, we examine whether these models reflect differences in holistic versus analytic tendencies. Our findings suggest that VLMs not only internalize the structural properties of language but also reproduce cultural behaviors embedded in the training data, indicating that cultural cognition may implicitly shape model outputs.
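The comparative analysis of image descriptions that the abstract mentions could, in its simplest form, score a caption for holistic (contextual, relational) versus analytic (object- and attribute-focused) vocabulary. The sketch below is purely illustrative; the cue word lists and scoring rule are assumptions for demonstration, not the authors' actual method.

```python
# Hypothetical sketch: score an image description as holistic vs. analytic
# by counting cue words. Word lists and scoring are illustrative assumptions.

HOLISTIC_CUES = {"background", "scene", "surrounded", "between", "among", "overall"}
ANALYTIC_CUES = {"object", "color", "shape", "size", "each", "individual"}

def style_score(description: str) -> float:
    """Return a score in [-1, 1]: positive leans holistic, negative analytic."""
    words = description.lower().split()
    holistic = sum(w in HOLISTIC_CUES for w in words)
    analytic = sum(w in ANALYTIC_CUES for w in words)
    total = holistic + analytic
    return 0.0 if total == 0 else (holistic - analytic) / total

print(style_score("a cat surrounded by plants in a calm scene"))    # leans holistic
print(style_score("each object has a distinct color and shape"))    # leans analytic
```

Aggregating such scores over many generated captions, per model, would give the kind of population-level contrast the study reports.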


Sectoral Coupling in Linguistic State Space

Dumbrava, Sebastian

arXiv.org Artificial Intelligence

This work presents a formal framework for quantifying the internal dependencies between functional subsystems within artificial agents whose belief states are composed of structured linguistic fragments. Building on the Semantic Manifold framework, which organizes belief content into functional sectors and stratifies them across hierarchical levels of abstraction, we introduce a system of sectoral coupling constants that characterize how one cognitive sector influences another within a fixed level of abstraction. The complete set of these constants forms an agent-specific coupling profile that governs internal information flow, shaping the agent's overall processing tendencies and cognitive style. We provide a detailed taxonomy of these intra-level coupling roles, covering domains such as perceptual integration, memory access and formation, planning, meta-cognition, execution control, and affective modulation. We also explore how these coupling profiles generate feedback loops, systemic dynamics, and emergent signatures of cognitive behavior. Methodologies for inferring these profiles from behavioral or internal agent data are outlined, along with a discussion of how these couplings evolve across abstraction levels. This framework contributes a mechanistic and interpretable approach to modeling complex cognition, with applications in AI system design, alignment diagnostics, and the analysis of emergent agent behavior.
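The abstract's "coupling profile" can be pictured as a table of constants k[target][source] giving how strongly one cognitive sector influences another at a fixed abstraction level. The toy sketch below uses made-up sector names, values, and a simple linear update rule; these are illustrative assumptions, not the paper's formal definitions.

```python
# Toy illustration of an agent-specific coupling profile and one step of
# intra-level information flow. All names and numbers are assumptions.

def propagate(activity, coupling):
    """One step of intra-level flow: each sector's next activity is the
    coupling-weighted sum of all sectors' current activity."""
    return {
        target: sum(weights.get(source, 0.0) * a for source, a in activity.items())
        for target, weights in coupling.items()
    }

# hypothetical coupling constants k[target][source]
coupling = {
    "perception":    {},                               # no intra-level inputs here
    "memory":        {"perception": 0.6},
    "planning":      {"perception": 0.5, "memory": 0.4},
    "execution":     {"planning": 0.8},
    "metacognition": {"planning": 0.3, "execution": 0.2},
}
activity = {s: 0.0 for s in coupling}
activity["perception"] = 1.0

step1 = propagate(activity, coupling)   # perception feeds memory and planning
step2 = propagate(step1, coupling)      # influence reaches execution
```

Iterating `propagate` exposes the feedback loops and systemic dynamics the abstract refers to: different coupling profiles yield visibly different flow patterns from the same initial activity.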


Capturing Human Cognitive Styles with Language: Towards an Experimental Evaluation Paradigm

Varadarajan, Vasudha, Mahwish, Syeda, Liu, Xiaoran, Buffolino, Julia, Luhmann, Christian C., Boyd, Ryan L., Schwartz, H. Andrew

arXiv.org Artificial Intelligence

While NLP models often seek to capture cognitive states via language, the validity of predicted states is typically determined by comparing them to annotations created without access to the cognitive states of the authors. In the behavioral sciences, cognitive states are instead measured via experiments. Here, we introduce an experiment-based framework for evaluating language-based cognitive style models against human behavior. We focus on decision making and its relationship to the linguistic style of individuals describing a recent decision they made. Participants then complete a classical decision-making experiment that captures their cognitive style, determined by how their preferences change during a decision exercise. We find that language features intended to capture cognitive style can predict participants' decision style with moderate-to-high accuracy (AUC ~ 0.8), demonstrating that cognitive style can be partly captured and revealed by discourse patterns.
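The AUC ~ 0.8 figure measures how well a language-derived style score ranks participants by their experimentally measured decision style. A minimal, dependency-free sketch of that metric, with made-up scores and labels, might look like this:

```python
# Sketch of ROC AUC evaluation for a binary decision-style label.
# The labels and scores below are invented for illustration only.

def auc(labels, scores):
    """Probability that a random positive outranks a random negative
    (ties count half) -- the standard ROC AUC for binary labels."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical language-based style scores vs. experiment-derived labels
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.5, 0.1, 0.8, 0.6]
print(auc(labels, scores))  # → 0.875
```

An AUC of 0.5 would mean the language features carry no ranking information; values around 0.8, as reported, indicate moderately strong alignment between discourse patterns and experimentally measured style.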


Transferring Domain Knowledge with (X)AI-Based Learning Systems

Spitzer, Philipp, Kühl, Niklas, Goutier, Marc, Kaschura, Manuel, Satzger, Gerhard

arXiv.org Artificial Intelligence

In numerous high-stakes domains, training novices via conventional learning systems does not suffice. To impart tacit knowledge, experts' hands-on guidance is imperative. However, training novices by experts is costly and time-consuming, increasing the need for alternatives. Explainable artificial intelligence (XAI) has conventionally been used to make black-box artificial intelligence systems interpretable. In this work, we utilize XAI as an alternative: An (X)AI system is trained on experts' past decisions and is then employed to teach novices by providing examples coupled with explanations. In a study with 249 participants, we measure the effectiveness of such an approach for a classification task. We show that (X)AI-based learning systems are able to induce learning in novices and that their cognitive styles moderate learning. Thus, we take the first steps to reveal the impact of XAI on human learning and point AI developers to future options to tailor the design of (X)AI-based learning systems.


On the Perception of Difficulty: Differences between Humans and AI

Spitzer, Philipp, Holstein, Joshua, Vössing, Michael, Kühl, Niklas

arXiv.org Artificial Intelligence

With the increased adoption of artificial intelligence (AI) in industry and society, effective human-AI interaction systems are becoming increasingly important. A central challenge in the interaction of humans with AI is the estimation of difficulty for human and AI agents on individual task instances. These estimations are crucial to evaluate each agent's capabilities and, thus, are required to facilitate effective collaboration. So far, research in the field of human-AI interaction has estimated the perceived difficulty of humans and AI independently of each other. However, the effective interaction of human and AI agents depends on metrics that accurately reflect each agent's perceived difficulty in achieving valuable outcomes. Research to date has not yet adequately examined the differences in the perceived difficulty of humans and AI. Thus, this work reviews recent research on perceived difficulty in human-AI interaction and the contributing factors that allow each agent's perceived difficulty to be compared consistently, e.g., by creating the same prerequisites for both. Furthermore, we present an experimental design to thoroughly examine the perceived difficulty of both agents and contribute to a better understanding of the design of such systems.


AI -- A Personalization Engine On Steroids

#artificialintelligence

Marketing has come a long way since the days of John Wanamaker and his famous complaint that he didn't know which half of his marketing spend was useful and which wasn't. However, as senior Forbes contributor George Bradt contends in his article, Wanamaker Was Wrong -- The Vast Majority Of Advertising Is Wasted, attribution is extremely difficult to measure, and brands would be smarter to spot their most loyal customers rather than try to figure out the exact steps that should be attributed to a purchase. Although there are plenty of tools available to collect purchasing behavior, piecing together a somewhat reliable path-to-purchase is not easy, and Bradt believes the money is better spent both finding loyal customers and providing those customers with a personalized experience they will learn to covet. Today, personalization is becoming the operative word in a radically new customer-experience environment. In her article 3 AI-driven strategies for retailers in 2019, Giselle Abramovich states, "Personalization is table stakes for today's retailers, who are increasingly competing to be relevant in the hearts and minds of shoppers."


Website morphing and more revolutions in marketing

#artificialintelligence

John R. Hauser is the Kirin Professor of Marketing at M.I.T.'s Sloan School of Management, where he teaches new product development, marketing management, and statistical and research methodology. He has served MIT as Head of the MIT Marketing Group, Head of the Management Science Area, Research Director of the Center for Innovation in Product Development, and co-Director of the International Center for Research on the Management of Technology. He is the co-author of two textbooks, Design and Marketing of New Products and Essentials of New Product Management, and a former editor of Marketing Science (now on the advisory board). I think it wouldn't be smart to start this interview with something as dull and complex as a definition. Or am I the only one who likes to read lightweight, short articles? Let's just get it over with: "Website morphing matches the look and feel of a website to each customer so that, over a series of customers, revenue or profit are maximized."